Community Notes
X's latest Community Notes experiment allows AI to write the first draft
X is experimenting with a new way for AI to write Community Notes. The company is testing a collaborative notes feature that lets human contributors request an AI-written first draft of a Community Note. It's not the first time the platform has experimented with AI in Community Notes: it started a pilot program last year that allowed developers to create dedicated AI note writers.
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.78)
- Information Technology > Communications > Social Media (0.53)
MOSAIC: Modeling Social AI for Content Dissemination and Regulation in Multi-Agent Simulations
Liu, Genglin, Le, Vivian, Rahman, Salman, Kreiss, Elisa, Ghassemi, Marzyeh, Gabriel, Saadia
We present a novel, open-source social network simulation framework, MOSAIC, where generative language agents predict user behaviors such as liking, sharing, and flagging content. This simulation combines LLM agents with a directed social graph to analyze emergent deception behaviors and gain a better understanding of how users determine the veracity of online social content. By constructing user representations from diverse fine-grained personas, our system enables multi-agent simulations that model content dissemination and engagement dynamics at scale. Within this framework, we evaluate three different content moderation strategies with simulated misinformation dissemination, and we find that they not only mitigate the spread of non-factual content but also increase user engagement. In addition, we analyze the trajectories of popular content in our simulations, and explore whether simulation agents' articulated reasoning for their social interactions truly aligns with their collective engagement patterns. We open-source our simulation software to encourage further research within AI and social sciences.
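The loop below is a minimal, illustrative sketch of the kind of simulation MOSAIC describes: LLM agents with fine-grained personas on a directed follow graph, each deciding whether to like, share, or flag the content it sees. The decide() stub stands in for a real LLM call, and the personas, action set, and graph are assumptions made for illustration, not the authors' code.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    persona: str                       # fine-grained persona used to condition the LLM
    feed: list = field(default_factory=list)

def decide(agent: Agent, post: str) -> str:
    """Stub for an LLM call conditioned on agent.persona; returns an action."""
    return random.choice(["like", "share", "flag", "ignore"])

# Directed follow graph: each key follows the accounts in its list.
agents = {n: Agent(n, p) for n, p in [
    ("a", "retired teacher, skeptical of viral claims"),
    ("b", "sports fan, quick to share"),
    ("c", "local journalist, checks sources"),
]}
follows = {"a": ["c"], "b": ["a", "c"], "c": []}

# Seed one piece of unverified content into the network.
agents["c"].feed.append("Claim: city water supply contaminated (unverified)")

for step in range(3):
    for name, agent in agents.items():
        for followee in follows[name]:
            for post in agents[followee].feed:
                action = decide(agent, post)
                if action == "share" and post not in agent.feed:
                    agent.feed.append(post)   # sharing propagates content along the graph
                elif action == "flag":
                    print(f"step {step}: {name} flagged {post!r}")
```

Content dissemination emerges from the share decisions alone, which is what lets a framework like this compare moderation strategies by intervening on flagged posts between steps.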
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Europe > Kosovo > District of Gjilan > Kamenica (0.04)
- Europe > Ukraine > Kyiv Oblast > Kyiv (0.04)
- Asia > China > Heilongjiang Province > Daqing (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Media > News (1.00)
- Government (1.00)
- Health & Medicine (0.93)
As millions adopt Grok to fact-check, misinformation abounds
On June 9, soon after United States President Donald Trump dispatched US National Guard troops to Los Angeles to quell protests over immigration raids, California Governor Gavin Newsom posted two photographs on X. The images showed dozens of troops in National Guard uniforms sleeping on the floor in a cramped space, with a caption that decried Trump for disrespecting the troops. X users immediately turned to Grok, Elon Musk's AI, which is integrated directly into X, to check the veracity of the images. For that, they tagged @grok in a reply to the tweet in question, triggering an automatic response from the AI. "You're sharing fake photos," one user posted, citing a screenshot of Grok's response that claimed a reverse image search could not find the exact source.
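For readers unfamiliar with the mechanic, the flow the article describes (tag @grok in a reply, receive an automatic answer) can be sketched roughly as below. Everything here is hypothetical: reverse_image_search() and llm_verdict() are stubs, and X's actual Grok integration is proprietary. The sketch also encodes the article's core caveat, namely that a failed reverse image search is weak evidence that a photo is fake.

```python
def reverse_image_search(image_url: str) -> list[str]:
    """Stub: would return pages where the image (or a near-duplicate) appears."""
    return []   # pretend no indexed source was found

def llm_verdict(claim: str, evidence: list[str]) -> str:
    """Stub: would ask an LLM to weigh the claim against retrieved evidence."""
    if not evidence:
        # Absence of search hits is weak evidence: the photo may be fake,
        # or real but simply not yet indexed (the failure mode in the article).
        return "No matching source found; the image may be fake or merely unindexed."
    return "Possible sources: " + ", ".join(evidence)

def handle_mention(parent_text: str, parent_image_url: str) -> str:
    # Invoked when a user tags the bot in a reply to the post in question.
    return llm_verdict(parent_text, reverse_image_search(parent_image_url))

print(handle_mention("Troops sleeping on a floor", "https://example.com/photo.jpg"))
```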
- North America > United States > California > Los Angeles County > Los Angeles (0.24)
- Asia > Middle East > Iran (0.14)
- Asia > Afghanistan (0.05)
- (8 more...)
Fears AI factcheckers on X could increase promotion of conspiracy theories
A decision by Elon Musk's X social media platform to enlist artificial intelligence chatbots to draft factchecks risks increasing the promotion of "lies and conspiracy theories", a former UK technology minister has warned. Damian Collins accused Musk's firm of "leaving it to bots to edit the news" after X announced on Tuesday that it would allow large language models to write community notes to clarify or correct contentious posts, before users approve them for publication. The notes have previously been written by humans. X said using AI to write factchecking notes – which sit beneath some X posts – "advances the state of the art in improving information quality on the internet". Keith Coleman, the vice-president of product at X, said humans would review AI-generated notes and the note would appear only if people with a variety of viewpoints found it useful.
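The "variety of viewpoints" requirement Coleman mentions refers to bridging-based scoring: X's published Community Notes algorithm factors ratings into viewpoint terms and surfaces only notes that remain highly rated once viewpoint alignment is accounted for. Below is a toy version of that idea, not the production algorithm; the ratings, dimensions, and hyperparameters are illustrative.

```python
# Fit rating ~ mu + user_bias + note_bias + user_factor * note_factor by SGD,
# then read the note intercept (note_bias) as helpfulness: it is what remains
# after viewpoint agreement is explained by the factor term.
import numpy as np

rng = np.random.default_rng(0)
ratings = [  # (user, note, rating): 1 = helpful, 0 = not helpful
    (0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),   # note 0: praised across viewpoints
    (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0),   # note 1: praised by one side only
]
n_users, n_notes, dim, lam, lr = 4, 2, 1, 0.05, 0.1

mu = 0.0
bu, bn = np.zeros(n_users), np.zeros(n_notes)
fu = rng.normal(0, 0.1, (n_users, dim))
fn = rng.normal(0, 0.1, (n_notes, dim))

for _ in range(2000):
    for u, n, r in ratings:
        err = r - (mu + bu[u] + bn[n] + fu[u] @ fn[n])
        mu += lr * err
        bu[u] += lr * (err - lam * bu[u])
        bn[n] += lr * (err - lam * bn[n])
        fu[u], fn[n] = fu[u] + lr * (err * fn[n] - lam * fu[u]), \
                       fn[n] + lr * (err * fu[u] - lam * fn[n])

# A note "appears" only if its intercept clears a threshold, i.e. it is rated
# helpful even after viewpoint alignment is factored out.
print({f"note {n}": round(bn[n], 2) for n in range(n_notes)})
```

On data like this, the one-sided note's ratings are largely absorbed by the factor term, so its intercept stays lower than that of the note praised across viewpoints.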
Can Community Notes Replace Professional Fact-Checkers?
Borenstein, Nadav, Warren, Greta, Elliott, Desmond, Augenstein, Isabelle
Two commonly-employed strategies to combat the rise of misinformation on social media are (i) fact-checking by professional organisations and (ii) community moderation by platform users. Policy changes by Twitter/X and, more recently, Meta, signal a shift away from partnerships with fact-checking organisations and towards an increased reliance on crowdsourced community notes. However, the extent and nature of dependencies between fact-checking and helpful community notes remain unclear. To address these questions, we use language models to annotate a large corpus of Twitter/X community notes with attributes such as topic, cited sources, and whether they refute claims tied to broader misinformation narratives. Our analysis reveals that community notes cite fact-checking sources up to five times more than previously reported. Fact-checking is especially crucial for notes on posts linked to broader narratives, which are twice as likely to reference fact-checking sources compared to other sources. In conclusion, our results show that successful community moderation heavily relies on professional fact-checking.
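A pipeline like the one this abstract describes, using language models to annotate notes with topic, source type, and narrative linkage, might look roughly like the sketch below. call_llm() is a stub, and the label schema is our assumption rather than the paper's exact taxonomy.

```python
import json

PROMPT = """Label this community note. Reply with JSON:
{{"topic": "...", "source_type": "fact-checker|news|government|other|none",
  "broader_narrative": true/false}}
Note: {note}"""

def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return '{"topic": "health", "source_type": "fact-checker", "broader_narrative": true}'

def annotate(note: str) -> dict:
    # Parse the model's structured output into a record for corpus-level analysis.
    return json.loads(call_llm(PROMPT.format(note=note)))

notes = ["This claim was debunked by Snopes: the vaccine does not alter DNA."]
print([annotate(n) for n in notes])
```

Aggregating such records over a large note corpus is what lets the authors measure, for example, how often notes on narrative-linked posts cite fact-checking sources.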
- Media > News (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Immunology (1.00)
- (5 more...)
Correcting misinformation on social media with a large language model
Zhou, Xinyi, Sharma, Ashish, Zhang, Amy X., Althoff, Tim
Real-world misinformation can be partially correct and even factual but misleading. It undermines public trust in science and democracy, particularly on social media, where it can spread rapidly. High-quality and timely correction of misinformation that identifies and explains its (in)accuracies has been shown to effectively reduce false beliefs. Although manual correction is widely accepted, it is difficult to make timely and scalable, a growing concern as technologies like large language models (LLMs) make misinformation easier to produce. LLMs also have versatile capabilities that could accelerate misinformation correction. However, they struggle due to a lack of recent information, a tendency to produce false content, and limitations in addressing multimodal information. We propose MUSE, an LLM augmented with access to, and credibility evaluation of, up-to-date information. By retrieving evidence as refutations or contexts, MUSE identifies and explains, with references, the (in)accuracies in a piece of content that is not presupposed to be misinformation. It also describes images and conducts multimodal searches to verify and correct multimodal content. Fact-checking experts evaluate responses to social media content that is not presupposed to be misinformation: a broad mix of incorrect, partially correct, and correct posts that may or may not be misleading. We propose and evaluate 13 dimensions of misinformation correction quality, ranging from the accuracy of identifications and factuality of explanations to the relevance and credibility of references. The results demonstrate MUSE's ability to promptly write high-quality responses to potential misinformation on social media; overall, MUSE outperforms GPT-4 by 37% and even high-quality responses from laypeople by 29%. This work reveals LLMs' potential to help combat real-world misinformation effectively and efficiently.
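The abstract suggests a retrieve-then-correct pipeline: gather up-to-date evidence, filter it for credibility, and draft a referenced explanation of what is (in)accurate. The sketch below illustrates that shape only; retrieve() and draft_correction() are stubs, and the credibility allowlist is an assumption, not the paper's evaluation method.

```python
CREDIBLE_DOMAINS = {"who.int", "nature.com", "apnews.com"}  # illustrative allowlist

def retrieve(claim: str) -> list[dict]:
    """Stub: would run a (possibly multimodal) web search for the claim."""
    return [
        {"url": "https://who.int/page", "snippet": "WHO guidance says otherwise."},
        {"url": "https://randomblog.example", "snippet": "Unvetted hot take."},
    ]

def credible(result: dict) -> bool:
    # Keep evidence only from sources on the allowlist.
    return any(d in result["url"] for d in CREDIBLE_DOMAINS)

def draft_correction(claim: str, evidence: list[dict]) -> str:
    """Stub: would prompt an LLM to explain what is (in)accurate, citing refs."""
    refs = "; ".join(e["url"] for e in evidence)
    return f"Parts of this claim are contradicted by current guidance. See: {refs}"

claim = "New study proves the treatment is 100% effective."
evidence = [r for r in retrieve(claim) if credible(r)]
print(draft_correction(claim, evidence))
```

Grounding the drafted correction in filtered, retrieved evidence is what addresses the two LLM weaknesses the abstract names: stale knowledge and a tendency to produce false content.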
- North America > United States > Washington > King County > Seattle (0.14)
- South America (0.04)
- North America > Central America (0.04)
- (4 more...)
- Media > News (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
So, Fake Images of Trump With Black Voters Are a Thing Now
Recently, Donald Trump fans in Florida and Michigan have been auto-generating and spreading faked "pictures" of Trump surrounded by crowds of Black supporters, and earning significant traction for doing so. Coming at a time when President Joe Biden is worried about losing the Black voters who came out for his 2020 election, the Trump images have become a whole new subgenre of A.I. sludge. And no one in any position of power appears to know what to do about it. Last month, BBC Panorama reported on the proliferation of these deceitful likenesses. The first example showed Trump at a Christmas party with his arm around a couple of Black women, one of whom is seen wearing a Pen & Pixel–style tank; another showed him sitting on a house porch with six young Black men, smiling with his hands clasped.
- North America > United States > Michigan (0.25)
- North America > United States > New Mexico (0.07)
- North America > United States > Wisconsin (0.06)
- (4 more...)
Aligned: A Platform-based Process for Alignment
Shaotran, Ethan, Pesok, Ido, Jones, Sam, Liu, Emi
We are introducing Aligned, a platform for global governance and alignment of frontier models, and eventually superintelligence. While previous efforts at the major AI labs have attempted to gather inputs for alignment, these are often conducted behind closed doors. We aim to set the foundation for a more trustworthy, public-facing approach to safety: a constitutional committee framework. Initial tests with 680 participants result in a 30-guideline constitution with 93% overall support. We show that the platform scales naturally, instilling confidence and enjoyment in the community. We invite other AI labs and teams to plug and play into the Aligned ecosystem.
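As a rough illustration of the committee mechanism the abstract implies, the toy tally below admits a guideline into the constitution once it clears a support threshold across participants. The threshold, vote format, and example guidelines are assumptions, not Aligned's published procedure.

```python
from collections import defaultdict

votes = [  # (participant_id, guideline, supports) -- illustrative data
    (1, "Refuse to help build bioweapons", True),
    (2, "Refuse to help build bioweapons", True),
    (3, "Refuse to help build bioweapons", False),
    (1, "Always disclose AI identity", True),
    (2, "Always disclose AI identity", True),
    (3, "Always disclose AI identity", True),
]
THRESHOLD = 0.66  # assumed supermajority requirement

tally = defaultdict(lambda: [0, 0])      # guideline -> [support count, total votes]
for _, guideline, supports in votes:
    tally[guideline][0] += int(supports)
    tally[guideline][1] += 1

# Guidelines with enough cross-participant support enter the constitution.
constitution = [g for g, (s, t) in tally.items() if s / t >= THRESHOLD]
print(constitution)
```

Per-guideline support percentages computed this way are also what a headline figure like "93% overall support" would be averaged from.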
- North America > United States (0.14)
- Asia > China (0.04)
- South America > Chile (0.04)
- (27 more...)